21 research outputs found

    Warping the young stellar disc in the Galactic Centre

    Full text link
    We examine the influence of the circum-nuclear disc (CND) upon the orbital evolution of young stars in the Galactic Centre. We show that the gravity of the CND causes a precession of the orbits that is highly sensitive to the semi-major axis and inclination. We consider this differential precession in the context of the ongoing discussion about the origin of the young stars and suggest the possibility that all of them originated in a thin disc which was partially destroyed by the influence of the CND over a period of ~6 Myr. Comment: proc. conf. "The Universe Under the Microscope - Astrophysics at High Angular Resolution", 21-25 April 2008, Bad Honnef, Germany
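
    To make the scaling concrete, a textbook quadrupole-order secular estimate (not necessarily the exact treatment used in the paper): an orbit of semi-major axis a and inclination i, perturbed by an outer ring of mass M_CND and radius R_CND around a black hole of mass M_bullet, has its node precess at roughly

        \dot{\Omega} \simeq -\frac{3}{4}\,\frac{M_{\mathrm{CND}}}{M_{\bullet}}
        \left(\frac{a}{R_{\mathrm{CND}}}\right)^{3} n \cos i,
        \qquad n = \sqrt{\frac{G M_{\bullet}}{a^{3}}},

    so the precession rate grows as a^{3/2} and changes with cos i, which is exactly the kind of differential, inclination-sensitive precession the abstract invokes.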

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Get PDF
    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade. Peer reviewed

    Measurement of the top quark properties

    No full text
    Title: Measurement of the top quark properties Author: Mgr. Jaroslava Schovancová Department: Institute of Physics, Academy of Sciences of the Czech Republic Supervisor: RNDr. Jiří Chudoba, Ph.D., Institute of Physics, Academy of Sciences of the Czech Republic Abstract: This Thesis presents the ATLAS experiment measurement of the top quark differential cross-section as a function of pT, mass and rapidity of the tt̄ system. A sample of approx. 4.7 fb−1 of the 2011 pp collision data at the center-of-mass energy of 7 TeV was analyzed. The differential spectra shapes in the tt̄ system are consistent with the Standard Model and reasonably described by the event generators. Several activities in the scope of ATLAS Distributed Computing are presented, particularly in the areas of operations, monitoring, automation, and development of the PanDA Workload Management System. Such activities aim to improve efficiency and facilitate the use of distributed computing resources and therefore contribute to supporting the ATLAS Physics Program. Keywords: ATLAS, top quark, differential cross-section, missing transverse energy, ATLAS Distributed Computing
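
    As a reminder of how such a measurement is typically defined (a generic unfolding formula, not necessarily the thesis's exact expression), the differential cross-section in bin i of an observable X in {pT(tt̄), m(tt̄), y(tt̄)} is

        \frac{d\sigma_{t\bar{t}}}{dX}\bigg|_{i}
        = \frac{N_{i}^{\mathrm{unfolded}}}{\mathcal{L}\,\Delta X_{i}},

    where N_i^unfolded is the background-subtracted event yield unfolded for detector effects, \mathcal{L} is the integrated luminosity (approx. 4.7 fb−1 here) and \Delta X_i is the bin width; comparing the shapes of these spectra against event-generator predictions is the consistency test quoted above.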

    Měření vlastností top kvarku (Measurement of the top quark properties)

    No full text
    Thesis title: Measurement of the top quark properties Author: Mgr. Jaroslava Schovancová Department: Institute of Physics of the Academy of Sciences of the Czech Republic Supervisor: RNDr. Jiří Chudoba, Ph.D., Institute of Physics of the Academy of Sciences of the Czech Republic Abstract: This dissertation deals with the measurement, at the ATLAS experiment, of the differential cross-section of the top-antitop quark pair system as a function of the transverse momentum, mass and rapidity of the tt̄ system. A sample of approximately 4.7 fb−1 of data recorded in 2011 in proton-proton collisions at a centre-of-mass energy of 7 TeV was analyzed. The shapes of the differential cross-section spectra in the tt̄ system are consistent with the Standard Model prediction and well described by the available generators. Contributions to ATLAS Distributed Computing are also presented, in particular in the areas of operations, monitoring and automation, as well as the development of the PanDA Workload Management System. These activities aim to improve efficiency and facilitate the use of distributed computing resources, thereby contributing to the support of the physics programme of the ATLAS experiment. Keywords: ATLAS, top quark, differential cross-section, missing transverse energy, ATLAS Distributed Computing. Faculty of Mathematics and Physics

    Determination of the mass distribution in the Galactic centre from the stellar motions

    Get PDF
    We present an implementation of the statistical approach to the stars-gas mass exchange cycle in an N-body code. First, we summarize available data on stellar mass loss and derive the time dependence of the mass-loss rate of a single stellar population. Since the adopted probabilistic scheme that served as a basis for our implementation was limited to the linear star formation law, while observations seem to suggest a non-linearity, we derive a non-linear star formation scheme. Both sides of the mass exchange cycle are then implemented in the code with stellar and gaseous particles and compared with an analytic recipe to test their reliability. In the next step, this extended statistical approach is compared with a deterministic scheme for a fully dynamical model, the aim being to explore the divergence between the two models of different natures. As an illustration of the code's application and of the sensitivity of the resulting galaxy disk dynamics, several comparisons varying the key parameters are performed.
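
    Since the abstract contrasts a linear law with a non-linear one, here is a minimal Python sketch of what a probabilistic star formation step with a Schmidt-like non-linear law can look like; the exponent n, the timescale tau_0 and the particle fields are illustrative assumptions, not the paper's actual scheme.

        import numpy as np

        def star_formation_step(gas_density, dt, rho_0=1.0, tau_0=100.0,
                                n=1.5, rng=None):
            """Probabilistically convert gas particles into stars in one step.

            Schmidt-like law: the local star formation rate scales as rho**n,
            so the conversion timescale is tau_0 * (rho_0 / rho)**(n - 1).
            For n = 1 (the linear law of the original scheme) the timescale
            is the same for every particle, regardless of density.
            """
            rng = np.random.default_rng() if rng is None else rng
            tau = tau_0 * (rho_0 / gas_density) ** (n - 1.0)
            # Probability that a gas particle forms a star during dt.
            p_form = 1.0 - np.exp(-dt / tau)
            return rng.random(gas_density.size) < p_form

        # Example: denser gas particles are more likely to convert.
        gas_density = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
        print(star_formation_step(gas_density, dt=10.0))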

    Federated data storage evolution in HENP: data lakes and beyond

    No full text
    Storage has been identified as the main challenge for future distributed computing infrastructures: Particle Physics (HL-LHC, DUNE, Belle II), Astrophysics and Cosmology (SKA, LSST). In particular, the High Luminosity LHC (HL-LHC) will begin operations in 2026, with data volumes expected to increase by at least an order of magnitude compared with present systems. Extrapolating from existing trends in disk and tape pricing, and assuming flat infrastructure budgets, the implications for data handling for end-user analysis are significant. HENP experiments need to manage data across a variety of media based on the type of data and its uses: from tapes (cold storage) to disks and solid state drives (hot storage) to caches (including worldwide data access in clouds and "data lakes"). The DataLake R&D project aims at exploring an evolution of distributed storage while bearing in mind the very high demands of the HL-LHC era. Its primary objective is to optimize hardware usage and the operational costs of a storage system deployed across distributed centers connected by fat networks and operated as a single service. Such storage would host a large fraction of the data and optimize the cost, eliminating inefficiencies due to fragmentation. In this talk we will highlight the current status of the project, its achievements, its interconnection with other research activities in this field, such as WLCG-DOMA and ATLAS-Google DataOcean, and future plans.
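
    As an illustration of the kind of tiering policy such a system needs, a minimal sketch of mapping a dataset's "temperature" to a storage class; the tier names and thresholds are assumptions for illustration, not the DataLake project's actual policy.

        def choose_storage_class(accesses_per_month: float, custodial: bool) -> str:
            """Map a dataset's access pattern to a storage tier.

            Thresholds and tier names are illustrative assumptions only.
            """
            if custodial and accesses_per_month == 0:
                return "tape"       # cold: archival custodial copy only
            if accesses_per_month < 1:
                return "disk"       # warm: keep one replica on disk
            return "ssd-cache"      # hot: stage into a fast cache near the CPUs

        # Example: a custodial, never-read dataset stays on tape.
        print(choose_storage_class(0, custodial=True))   # tape
        print(choose_storage_class(30, custodial=False)) # ssd-cache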

    Evolution of HammerCloud to commission CERN Compute resources

    Get PDF
    HammerCloud is a testing service and framework used to commission computing resources, run continuous tests or on-demand large-scale stress tests, and benchmark components of various distributed systems with realistic full-chain experiment workflows. HammerCloud, used in production by the ATLAS and CMS experiments, has been a useful service for commissioning both compute resources and various components of the complex distributed systems of the LHC experiments, as well as an integral part of the monitoring suite that is essential for the computing operations of the experiments and their automation. In this contribution we review recent developments of the HammerCloud service that allow the HammerCloud infrastructure to be used to test data centre resources in the early phases of the infrastructure and services commissioning process. One of the benefits we believe HammerCloud can provide is the ability to tune the commissioning of new infrastructure, through functional and stress testing as well as benchmarking with "standard candle" workflows: experiment-realistic workloads that can be heavy on CPU, I/O, IOPS, or everything together. This extension of HammerCloud was successfully used in CERN IT during the prototype phase of the "BEER" Batch on EOS (Evaluation of Resources) project, and is being integrated with the continuous integration/continuous deployment suite for Batch service VMs.
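
    To illustrate the "standard candle" idea, a minimal sketch of timing a fixed reference workload against a known-good baseline; the command, baseline value and scoring are hypothetical, not HammerCloud's actual interface.

        import subprocess
        import time

        # Hypothetical standard candle: a fixed reference workload whose
        # runtime on a known-good site serves as the baseline.
        BASELINE_SECONDS = 600.0

        def run_candle(command: list[str]) -> float:
            """Run the reference workload once; return wall-clock seconds."""
            start = time.monotonic()
            subprocess.run(command, check=True)
            return time.monotonic() - start

        def score(elapsed: float) -> float:
            """Relative performance: 1.0 matches baseline, >1.0 is faster."""
            return BASELINE_SECONDS / elapsed

        # e.g. score(run_candle(["python3", "cpu_burner.py"]))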

    Architecture and prototype of a WLCG data lake for HL-LHC

    No full text
    The computing strategy document for the HL-LHC identifies storage as one of the main WLCG challenges a decade from now. Under the naive assumption of applying today's computing model, the ATLAS and CMS experiments will need an order of magnitude more storage resources than could realistically be provided by the funding agencies at the same cost as today. The evolution of the computing facilities and of the way storage is organized and consolidated will play a key role in how this possible shortage of resources is addressed. In this contribution we will describe the architecture of a WLCG data lake, intended as a storage service geographically distributed across large data centers connected by fast, low-latency networks. We will present the experience with our first prototype, showing how the concept, implemented at different scales, can serve different needs, from regional and national consolidation of storage to an international data provisioning service. We will highlight how the system leverages its distributed nature, economies of scale and different classes of storage to optimise the hardware and operational cost, through a set of policy-driven decisions concerning data placement and data retention. We will discuss how the system leverages or interoperates with existing federated storage solutions. We will finally describe the possible data processing models in this environment and present our first benchmarks.
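
    To make "policy-driven data placement and retention" concrete, a minimal sketch of one such retention rule, under assumed field names and thresholds rather than the prototype's actual policy: keep the custodial copy, evict cold non-custodial disk replicas.

        from dataclasses import dataclass

        @dataclass
        class Replica:
            site: str
            storage_class: str      # "tape" or "disk"
            days_since_access: int
            custodial: bool

        def evictable(r: Replica, cold_after_days: int = 90) -> bool:
            """A replica may be evicted only if it is a cold, non-custodial
            disk copy; the 90-day threshold is an illustrative assumption."""
            return (r.storage_class == "disk"
                    and not r.custodial
                    and r.days_since_access > cold_after_days)

        replicas = [
            Replica("CERN", "tape", 400, custodial=True),
            Replica("FZK", "disk", 120, custodial=False),
            Replica("BNL", "disk", 5, custodial=False),
        ]
        print([r.site for r in replicas if evictable(r)])  # ['FZK']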